    A Cognitive Architecture for the Coordination of Utterances

    Dialog partners coordinate with each other to reach a common goal. The analogy with other joint activities has sparked interesting observations (e.g., about the norms governing turn-taking) and has informed studies of linguistic alignment in dialog. However, the parallels between language and action have not been fully explored, especially with regard to the mechanisms that support moment-by-moment coordination during language use in conversation. We review the literature on joint actions to show (i) what sorts of mechanisms allow coordination and (ii) which types of experimental paradigms can be informative about the nature of such mechanisms. Regarding (i), there is converging evidence that the actions of others can be represented in the same format as one’s own actions. Furthermore, the predicted actions of others are taken into account in the planning of one’s own actions. Similarly, we propose that interlocutors are able to coordinate their acts of production because they can represent their partner’s utterances. They can then use these representations to build predictions, which they take into account when planning self-generated utterances. Regarding (ii), we propose a new methodology to study interactive language. Psycholinguistic tasks that have traditionally been used to study individual language production are distributed across two participants, who either produce two utterances simultaneously or complete each other’s utterances.

    Predicting turn-ends in discourse context

    Research suggests that during conversation, interlocutors coordinate their utterances by predicting the speaker’s forthcoming utterance and its end. In two experiments, we used a button-pressing task, in which participants pressed a button when they thought a speaker reached the end of their utterance, to investigate what role the wider discourse plays in turn-end prediction. Participants heard two-utterance sequences, in which the content of the second utterance was or was not constrained by the content of the first. In both experiments, participants responded earlier, but not more precisely, when the first utterance was constraining rather than unconstraining. Response times and precision were unaffected by whether they listened to dialogues or monologues (Experiment 1) and by whether they read the first utterance out loud or silently (Experiment 2), providing no indication that activation of production mechanisms facilitates prediction. We suggest that content predictions aid comprehension but not turn-end prediction.
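    The contrast between responding earlier and responding more precisely can be made concrete with a small sketch. The following Python snippet (illustrative only, with hypothetical data and variable names, not the study's analysis code) computes earliness as the mean signed offset of each button press relative to the actual turn end, and precision as the spread of those offsets.

```python
# Illustrative sketch with hypothetical data, not the study's analysis code.
from statistics import mean, stdev

# Hypothetical trials: (button_press_ms, actual_turn_end_ms).
trials = [(1480, 1500), (1620, 1600), (1390, 1450), (1700, 1680)]

# Signed offset: negative values mean the press preceded the actual turn end.
offsets = [press - end for press, end in trials]

earliness = mean(offsets)   # more negative = earlier responses on average
precision = stdev(offsets)  # smaller spread = more precise turn-end timing

print(f"mean offset: {earliness:.1f} ms, SD of offsets: {precision:.1f} ms")
```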

    Coordinating utterances during turn-taking: The role of prediction, response preparation, and articulation

    During conversation, interlocutors rapidly switch between speaker and listener roles and take turns at talk. How do they achieve such fine coordination? Most research has concentrated on the role of prediction, but listeners must also prepare a response in advance (assuming they wish to respond) and articulate this response at the appropriate moment. Such mechanisms may overlap with the processes of comprehending the speaker’s incoming turn and predicting its end. However, little is known about the stages of response preparation and production. We discuss three questions pertaining to such stages: (1) Do listeners prepare their own response in advance? (2) Can listeners buffer their prepared response? (3) Does buffering lead to interference with concurrent comprehension? We argue that fine coordination requires more than just an accurate prediction of the interlocutor’s incoming turn: Listeners must also simultaneously prepare their own response.

    How do listeners time response articulation when answering questions? The role of speech rate

    During conversation, interlocutors often produce their utterances with little overlap or gap between their turns. But what mechanism underlies this striking ability to time articulation appropriately? In 2 verbal yes/no question-answering experiments, we investigated whether listeners use the speech rate of questions to time articulation of their answers. In Experiment 1, we orthogonally manipulated the speech rate of the context (e.g., Do you have a . . .) and final word (e.g., dog?) of questions using time-compression, so that each component was spoken at the natural rate or twice as fast. Listeners responded earlier when the context was speeded rather than natural, suggesting they used the speaker’s context rate to time answer articulation. Additionally, listeners responded earlier when the speaker’s final syllable was speeded rather than natural, regardless of context rate, suggesting they adjusted the timing of articulation after listening to a single syllable produced at a different rate. We replicated this final word effect in Experiment 2, which also showed that our speech rate manipulation did not influence the timing of response preparation. Together, these findings suggest listeners use speech rate information to time articulation when answering questions.
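    As a rough illustration of the time-compression manipulation described above, the sketch below uses librosa's phase-vocoder time stretching to halve the duration of a question's context and final word without changing pitch. The file name, the split point between context and final word, and the output naming are assumptions for illustration, not the study's actual stimuli or pipeline.

```python
# Illustrative sketch of time-compressing parts of a recorded question.
import numpy as np
import librosa
import soundfile as sf

# Hypothetical recording of a question (e.g., "Do you have a ... dog?").
y, sr = librosa.load("question.wav", sr=None)

# Assumed boundary between the context and the final word, at 2.0 s.
split = int(2.0 * sr)
context, final_word = y[:split], y[split:]

# rate=2.0 halves the duration ("twice as fast") while preserving pitch.
context_fast = librosa.effects.time_stretch(context, rate=2.0)
final_fast = librosa.effects.time_stretch(final_word, rate=2.0)

# One of the four conditions: speeded context followed by the natural final word.
sf.write("context_fast_final_natural.wav",
         np.concatenate([context_fast, final_word]), sr)
```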

    Prediction error boosts retention of novel words in adults but not in children

    How do we update our linguistic knowledge? In seven experiments, we asked whether error-driven learning can explain under what circumstances adults and children are more likely to store and retain a new word meaning. Participants were exposed to novel object labels in the context of more or less constraining sentences or visual contexts. Both two-to-four-year-olds (mean age = 38 months) and adults were strongly affected by expectations based on sentence constraint when choosing the referent of a new label. In addition, adults formed stronger memory traces for novel words that violated a stronger prior expectation. However, preschoolers' memory was unaffected by the strength of their prior expectations. We conclude that the encoding of new word-object associations in memory is affected by prediction error in adults, but not in preschoolers.
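    The abstract appeals to error-driven learning; one common formalization of that idea (not necessarily the authors' model) is a delta-rule update in which the change to a word-object association scales with the prediction error. The sketch below uses hypothetical values for the learning rate and association strengths.

```python
# Illustrative delta-rule (Rescorla-Wagner-style) sketch of error-driven learning.
# This is one standard formalization of the idea, not the authors' model.

def update_association(strength: float, outcome: float, learning_rate: float = 0.3) -> float:
    """Return the new association strength after one learning event.

    strength      -- current expectation that the label maps to this object (0..1)
    outcome       -- what actually happened (1.0 = label paired with this object)
    learning_rate -- hypothetical step size
    """
    prediction_error = outcome - strength
    return strength + learning_rate * prediction_error

# A strongly violated expectation (low prior strength) changes more than a
# confirmed one, mirroring the boost for surprising word-object pairings.
print(update_association(0.1, 1.0))  # large error -> large update (0.37)
print(update_association(0.8, 1.0))  # small error -> small update (0.86)
```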

    Interference in the shared-Stroop task: A comparison of self- and other-monitoring

    Co-actors represent and integrate each other's actions, even when they need not monitor one another. However, monitoring is important for successful interactions, particularly those involving language, and monitoring others' utterances probably relies on similar mechanisms as monitoring one's own. We investigated the effect of monitoring on the integration of self- and other-generated utterances in the shared-Stroop task. In a solo version of the Stroop task (with a single participant responding to all stimuli; Experiment 1), participants named the ink colour of mismatching colour words (incongruent stimuli) more slowly than matching colour words (congruent). In the shared-Stroop task, one participant named the ink colour of words in one colour (e.g. red), while ignoring stimuli in the other colour (e.g. green); the other participant either named the other ink colour or did not respond. Crucially, participants either provided feedback about the correctness of their partner's response (Experiment 3) or did not (Experiment 2). Interference was greater when both participants responded than when only one did, but only when their partners provided feedback. We argue that feedback increased interference because monitoring one's partner enhanced representations of the partner's target utterance, which in turn interfered with self-monitoring of the participant's own utterance.
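    For readers unfamiliar with the Stroop measure, the interference effect reported above is simply the difference between mean naming latencies on incongruent and congruent trials. The sketch below illustrates that computation with hypothetical latencies, not data from the study.

```python
# Illustrative sketch with hypothetical colour-naming latencies (ms).
from statistics import mean

latencies = {
    "congruent":   [610, 595, 630, 605],   # e.g. "RED" printed in red ink
    "incongruent": [705, 690, 720, 698],   # e.g. "GREEN" printed in red ink
}

interference = mean(latencies["incongruent"]) - mean(latencies["congruent"])
print(f"Stroop interference: {interference:.0f} ms")
```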